Non-parametric expectation maximization: a learning automata approach

Authors

  • Wael Abd-Almageed
  • Aly I. El-Osery
  • Christopher E. Smith
Abstract

The well-known Expectation Maximization (EM) technique suffers from two major drawbacks. First, the number of mixture components must be specified a priori. Second, EM is sensitive to initialization. In this paper, we present a new stochastic technique for estimating the mixture parameters. A Parzen window is used to obtain a discrete estimate of the PDF of the given data. Stochastic learning automata are then used to select the mixture parameters that minimize the distance between the discrete PDF estimate and the Expectation Maximization estimate. The validity of the proposed approach is verified using bivariate simulation data.
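The pipeline the abstract describes can be sketched in a few lines. This is a hedged, minimal illustration, not the authors' implementation: it works in 1-D rather than with bivariate data, and it substitutes plain random search for the stochastic learning automata when selecting the Gaussian-mixture parameters that minimize the distance to the discrete Parzen-window estimate. All function names are mine.

```python
import math
import random

def parzen_density(data, grid, h):
    """Parzen-window (Gaussian kernel) estimate of the PDF on a discrete grid."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
            for x in grid]

def mixture_density(params, grid):
    """Density of a two-component Gaussian mixture (w, mu1, mu2, shared sigma)."""
    w, mu1, mu2, sigma = params
    def g(x, mu):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return [w * g(x, mu1) + (1 - w) * g(x, mu2) for x in grid]

def distance(p, q):
    """Sum of squared differences between two discrete density estimates."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def fit(data, grid, h=0.3, iters=2000, seed=0):
    """Pick mixture parameters minimizing the distance to the Parzen estimate.
    NOTE: random search here is only a stand-in for the learning-automata
    selection scheme used in the paper."""
    rng = random.Random(seed)
    target = parzen_density(data, grid, h)
    best, best_d = None, float("inf")
    for _ in range(iters):
        params = (rng.uniform(0.2, 0.8), rng.uniform(-3.0, 0.0),
                  rng.uniform(0.0, 3.0), rng.uniform(0.3, 1.5))
        d = distance(target, mixture_density(params, grid))
        if d < best_d:
            best, best_d = params, d
    return best

# Synthetic two-mode data, then fit the mixture against its Parzen estimate.
rng = random.Random(1)
data = ([rng.gauss(-1.5, 0.5) for _ in range(200)]
        + [rng.gauss(1.5, 0.5) for _ in range(200)])
grid = [i / 10 - 4 for i in range(81)]  # grid over [-4, 4] with spacing 0.1
w, mu1, mu2, sigma = fit(data, grid)
```

An actual learning automaton would maintain a probability distribution over candidate parameter actions and reinforce the ones whose distance to the Parzen estimate is small, rather than sampling uniformly as above.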


Similar articles

Spatial pattern discovery by learning a probabilistic parametric model from multiple attributed relational graphs

This paper presents the methodology and theory for automatic spatial pattern discovery from multiple attributed relational graph samples. The spatial pattern is modelled as a mixture of probabilistic parametric attributed relational graphs. A statistic learning procedure is designed to learn the parameters of the spatial pattern model from the attributed relational graph samples. The learning p...


Non-parametric Policy Search with Limited Information Loss

Learning complex control policies from non-linear and redundant sensory input is an important challenge for reinforcement learning algorithms. Non-parametric methods that approximate values functions or transition models can address this problem, by adapting to the complexity of the data set. Yet, many current non-parametric approaches rely on unstable greedy maximization of approximate value f...


Distributed Submodular Maximization

Many large-scale machine learning problems – clustering, non-parametric learning, kernel machines, etc. – require selecting a small yet representative subset from a large dataset. Such problems can often be reduced to maximizing a submodular set function subject to various constraints. Classical approaches to submodular optimization require centralized access to the full dataset, which is impra...
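The submodular reduction mentioned in this abstract can be illustrated with the classical greedy algorithm for cardinality-constrained submodular maximization, shown here on a max-coverage objective (a standard submodular function). This is a hedged sketch of the centralized baseline, not the distributed method the paper proposes:

```python
def greedy_max_coverage(sets, k):
    """Greedily maximize the submodular coverage function
    f(S) = |union of chosen sets| subject to |S| <= k."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the set with the largest marginal coverage gain.
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
picked, covered = greedy_max_coverage(sets, 2)  # picks sets 2 then 0
```

Because the coverage objective is monotone submodular, this greedy procedure carries the classical (1 - 1/e) approximation guarantee; the centralized access to all sets is exactly what the paper above calls impractical at scale.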


A Multi-objective Memetic Optimization Approach to the Circular Antenna Array Design Problem

The paper provides a novel approach to the design of nonuniform planar circular antenna arrays for achieving maximal side lobe level suppression and directivity. The current excitation amplitudes and phase perturbations of the array elements are determined using an Adaptive Memetic algorithm resulting from a synergy of Differential Evolution (DE) and Learning Automata that is able to significan...


Feature Extraction by Non-Parametric Mutual Information Maximization

We present a method for learning discriminative feature transforms using as criterion the mutual information between class labels and transformed features. Instead of a commonly used mutual information measure based on Kullback-Leibler divergence, we use a quadratic divergence measure, which allows us to make an efficient non-parametric implementation and requires no prior assumptions about cla...



Journal:

Volume   Issue 

Pages  -

Publication date: 2003